Our main objectives are to handle some of the common data problems encountered with marine fastloc GPS data and diving data, using a narwhal dataset provided by Dr. Marianne Marcoux. Specifically, we aim to address the following:
Irregular locations
Time gaps
Including diving data-streams
First, we’ll set up the workspace with required packages.
library(momentuHMM)
library(dplyr)
library(tidyr)
library(lubridate)
library(adehabitatLT)
library(sf)
library(tmap)
library(terra)
library(units)
library(stringr)
library(diveMove)
library(conicfit)
library(car)
library(mitools)
library(doFuture)
library(corrplot) # Pearson's correlation matrix using cor()
library(data.table)

Please also set working directory to “Day2” of the HMM workshop:
For simplicity, we will also only look at the data for the month of August, 2017.
The data we obtain is often quite messy, with some records missing
information and other records duplicated. We can keep only records
with complete location data using !is.na(x) & !is.na(y),
and remove duplicate records (same time, location, and location class)
using the lag function from dplyr, which
returns the value from the previous row.
tracks <- tracks %>%
# remove missing locations
filter(!is.na(x) & !is.na(y),
# remove identical records
!(time == lag(time) & x == lag(x) & y == lag(y) & loc_class == lag(loc_class)))

Next, we’ll convert the data to a spatial dataset using the
sf package and plot the data. First, we define the
coordinate reference system of the original data (in this case WGS84,
which is defined by the EPSG code 4326). Next, we will
project the data into NAD83(CSRS) UTM zone 21N (EPSG:2962), which
expresses the coordinates in meter units with minimal distortion for
this data set.
tracks_proj <- tracks %>%
st_as_sf(coords = c("x", "y")) %>% # converts to an sf object
st_set_crs(4326) %>% # define CRS
st_transform(2962) # reproject data to UTM

For the first part of this tutorial, we’ll use only the fastloc GPS data so we don’t have to deal with location error.
We lose some data, particularly near the end of the tracks, but we will integrate ARGOS locations later in this tutorial.
Now, we can map the data using the tmap package to
visualize what it looks like.
# plot GPS
tracks_gps %>%
group_by(ID) %>%
summarise(do_union = FALSE) %>%
st_cast("LINESTRING") %>%
tm_shape() +
tm_lines(col = "ID", palette = "Dark2")

The classic HMM assumes the observation data is in discrete time and that there is no missing data in the predictor variables. There are two key decisions we must make: the temporal resolution to use, and how to address data gaps. The desired resolution depends predominantly on the biological question you are asking, as different behaviours and biological processes occur at different spatial and temporal scales (e.g., seasonal migration, daily movement between foraging and resting grounds, and fine-scale foraging decisions). Generally, higher resolution data is preferred as it carries more information; however, it is possible to have too high a resolution, wherein fine-scale variability drowns out the signal from the coarse-scale patterns of interest (e.g., seasonal migration). In this case, we will be linking the movement data with high resolution (75 s) dive data to look at finer-scale behaviours (on the order of a few hours). My rule of thumb is that you want 3-50 data points per behaviour. For behaviours spanning several hours, that roughly corresponds to a desired resolution between 2 min and 60 min.
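As a rough sanity check of that rule of thumb, the candidate resolution range can be derived from an assumed bout duration; the 2-3 h bout lengths below are illustrative, not estimated from the narwhal data:

```r
# Rule of thumb: roughly 3-50 data points per behavioural bout
bout_minutes <- c(2, 3) * 60          # assumed bout lengths of 2-3 h
res_min <- min(bout_minutes) / 50     # finest useful resolution (min)
res_max <- max(bout_minutes) / 3      # coarsest useful resolution (min)
c(res_min, res_max)                   # roughly the 2-60 min range above
```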
First, let’s calculate the time difference between successive records
using difftime and lead (compares current row
to the following row). For the last record of each individual (i.e., when
ID != lead(ID)), we will set dt to NA.
# Calculate time difference between locations
tracks_gps <- tracks_gps %>%
mutate(dt = ifelse(ID == lead(ID), # If next data row is same individual
difftime(lead(time), time, units = "mins"), NA
)) # calculate time difference

Let’s see what resolutions may be possible in the data by looking at the most frequent time gaps.
# Zoom in on short intervals
hist(tracks_gps$dt, 1000, main = NA, xlab = "Time difference (min)", xlim = c(0,100))

##
## 10 11 12 22 9 13
## 400 145 90 64 63 63
We see that the most frequent time gap is 10 min, followed by 11, 12, 22, 9, and 13 min. We also see that the majority of the gaps are < 60 min; however, some are in excess of 600 min. Because HMMs assume there are no missing records, finer resolutions will have more gaps that would need to be interpolated. Frequent and large data gaps can be difficult to handle, especially as the number of missing data points approaches or exceeds the existing data; we really want to avoid this. Let’s examine the potential data structure at different resolutions for the different animals.
We first create a function that approximates the proportion of missing locations we would have for a given resolution.
# Make function to estimate proportion of missing location
p_na <- function(time, n_loc, res) {
time <- round_date(time, paste(res,"min")) # round to nearest resolution
time <- na.omit(time[time != lead(time)]) # remove duplicate time
# calculate maximum number of locations
max_n_loc <- length(seq(time[1], tail(time, 1) + res*60,
by = as.difftime(res, units = "mins")))
n_NA <- max_n_loc - (length(time)+1)
n_NA / max_n_loc
}

We can now use this function to look at the proportion of NAs we would get at 10, 20, 30, and 60 min resolutions.
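As a cross-check of the idea behind p_na, the same calculation can be sketched with base R only (the fix times below are invented): round each fix into a res-minute slot, drop duplicate slots, and count the empty slots spanned by the track. This is a simplification, not a drop-in replacement for p_na.

```r
# Base-R sketch: proportion of res-minute slots containing no fix
p_na_base <- function(time, res) {
  slot <- unique(round(as.numeric(time) / (res * 60)))  # slot index per fix
  n_slots <- max(slot) - min(slot) + 1                  # slots spanned by track
  (n_slots - length(slot)) / n_slots                    # proportion of empty slots
}
t0 <- as.POSIXct("2017-08-01 00:00", tz = "UTC")
fixes <- t0 + c(0, 10, 20, 40, 50, 120) * 60            # toy fix times (minutes)
p_na_base(fixes, 10)                                    # 7 of 13 slots are empty
```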
# summarise track dt
tracks_gps %>%
st_drop_geometry() %>%
group_by(ID) %>%
summarise(p_NA_10m = p_na(time, n_loc, 10), # 10 min
p_NA_20m = p_na(time, n_loc, 20), # 20 min
p_NA_30m = p_na(time, n_loc, 30), # 30 min
p_NA_60m = p_na(time, n_loc, 60)) # 60 min

## # A tibble: 3 × 5
## ID p_NA_10m p_NA_20m p_NA_30m p_NA_60m
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 T172062 0.486 0.236 0.191 0.152
## 2 T172064 0.502 0.203 0.120 0.0736
## 3 T172066 0.488 0.230 0.185 0.160
Here we see that at the 10 min interval, around 50% of the locations would be missing. This is the limit I would be comfortable with, since at finer resolutions, simulated data would outweigh real data and may bias the results. Very large data gaps that contribute much of the missing locations can be excluded from the analysis; therefore, for this tutorial, I will use a 10 min resolution.
There are several ways to deal with data gaps, and I will address four:

1. Interpolation (linear or statistical)
2. Voiding data gaps
3. Path segmentation
4. Multiple imputation
For large datasets with few and small gaps, the simplest approach is to use linear interpolation. First, let’s identify the minute of the hour on which we most often have data.
# which minute has the most data
tracks_gps %>%
st_drop_geometry() %>% # convert back to data.frame
group_by(ID) %>%
summarise(minute = str_sub(minute(time), -1)) %>%
table()

## `summarise()` has grouped output by 'ID'. You can override using the `.groups`
## argument.
## minute
## ID 0 1 2 3 4 5 6 7 8 9
## T172062 80 78 62 52 37 40 40 45 37 32
## T172064 77 68 71 67 32 32 40 32 31 38
## T172066 138 52 73 48 44 42 37 27 22 18
It looks like, for all three tracks, the most locations fall on minute 0 (i.e., the 10, 20, 30, 40, 50, or 60 min mark). Next, for each track, we will create a vector of times at which to estimate locations.
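For reference, rounding to the nearest 10 min (what lubridate's round_date does below) can be mimicked in base R via epoch arithmetic; the timestamp here is invented:

```r
# Round a POSIXct timestamp to the nearest 10 min (600 s) slot
t0 <- as.POSIXct("2017-08-01 12:07:30", tz = "UTC")
t_round <- as.POSIXct(round(as.numeric(t0) / 600) * 600,
                      origin = "1970-01-01", tz = "UTC")
format(t_round, "%H:%M")  # 12:07:30 rounds up to 12:10
```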
# convert tracks back to data.frame with xy coordinates
tracks_gps_ls <- tracks_gps %>%
mutate(x = st_coordinates(tracks_gps)[,"X"],
y = st_coordinates(tracks_gps)[,"Y"]) %>%
st_drop_geometry() %>%
split(.,.$ID) # split into list
# create full time series on which to estimate locations rounded to the nearest 10 min
tracks_gps_ls_time <- tracks_gps %>%
st_drop_geometry() %>% # convert to data.frame
group_by(ID) %>%
summarise(time = seq(round_date(first(time), "10 min"),
round_date(last(time), "10 min"),
by = 60*10)) %>%
split(.,.$ID) # split into list

## `summarise()` has grouped output by 'ID'. You can override using the `.groups`
## argument.
Now, we can interpolate the locations for each track.
# function to create a data frame with approximated locations
approx_locs <- function(tracks, times){
data.frame(ID = times$ID,
time = times$time,
x = approx(tracks$time, tracks$x,
xout = times$time)$y,
y = approx(tracks$time, tracks$y,
xout = times$time)$y)
}
# Interpolate the location at the times from the sequence
tracks_gps_linear <- mapply(approx_locs, tracks_gps_ls, tracks_gps_ls_time,
SIMPLIFY = F) %>%
do.call("rbind", .) # convert list of tracks back to a single data.frame
# remove row names added by rbind
rownames(tracks_gps_linear) <- NULL
# plot locations
plot(tracks_gps_linear$x, tracks_gps_linear$y, pch = 20, col = "red", xlab = "x", ylab = "y", asp = 1)
points(st_coordinates(tracks_gps)[,"X"], st_coordinates(tracks_gps)[,"Y"], pch = 20)

Looks like it works. Let’s try fitting an HMM to this. First, let’s
prepare the data using prepData and plot the data to
estimate starting parameters.
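Under the hood, approx performs piecewise-linear interpolation between the observed times; a minimal example with toy coordinates (not narwhal data):

```r
# Interpolate a coordinate at regular 10-min times from irregular fixes
obs_t <- c(0, 12, 31, 40)            # observation times (minutes)
obs_x <- c(0, 120, 310, 400)         # coordinate that moves linearly with time
out_t <- seq(0, 40, by = 10)         # regular output times
approx(obs_t, obs_x, xout = out_t)$y # 0 100 200 300 400
```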
Note how you can see the sections where we linearly interpolated
locations as horizontal segments in the step-length plot.
# Setting up the starting values
mu0 <- c(50, 500) # Mean step length
sigma0 <- mu0 # Sd of the step length
kappa0 <- c(0.1, 1) # Turning angle concentration parameter (kappa > 0)

Ok, we are ready. Let’s fit the HMM.
set.seed(1)
# Fit a 2 state HMM
HMM_gps_linear <- fitHMM(prep_gps_linear, nbStates = 2, dist = list(step = "gamma",
angle = "vm"), Par0 = list(step = c(mu0, sigma0), angle = kappa0))
plot(HMM_gps_linear, ask = FALSE)

## Decoding state sequence... DONE
## Computing pseudo-residuals...
The plotted states look relatively OK; however, there is quite a weird flat section in the QQ-plot of the turning angle. In data with large linearly-interpolated gaps, the model often represents the original dynamic data as one state, and defines the straight interpolated segments with a second state. This is a common problem when data gaps are frequent or large, such that the information in the interpolated data outweighs the signal from the original observations. Generally, I would only use linear interpolation when data gaps are small (< 3 locations) or relatively infrequent (< 20 % of the modelled locations). In our data, some gaps are > 5 hours (30 locations) and > 50% of the modelled locations are interpolated, so we need to use another approach.

The flat section in the QQ-plot around 0 emerges because the model overestimates the values just below the middle quantile (in our case, \(0^\circ\)) and just above the middle quantile. In other words, our model is not capturing the large amount of turning angles really close to \(0^\circ\) (this may be considered a good thing in our case, since we know these are linearly interpolated).
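To see why interpolated segments pile up turning angles at exactly \(0^\circ\): consecutive collinear steps, as produced by linear interpolation, give a turning angle of zero. A toy check with invented coordinates:

```r
# Turning angle between two successive steps, wrapped to (-pi, pi]
turn_angle <- function(p1, p2, p3) {
  a <- atan2(p2[2] - p1[2], p2[1] - p1[1])  # bearing of first step
  b <- atan2(p3[2] - p2[2], p3[1] - p2[1])  # bearing of second step
  ((b - a + pi) %% (2 * pi)) - pi
}
straight <- turn_angle(c(0, 0), c(1, 0), c(2, 0))  # collinear, as when interpolated
turning  <- turn_angle(c(1, 0), c(2, 0), c(2, 1))  # a 90-degree left turn
c(straight, turning)  # 0 and pi/2
```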
A slightly better way to interpolate locations is to fit a correlated
random walk (CRW) model to the data and predict the most likely
locations. momentuHMM contains wrapper functions to
interpolate missing locations by fitting continuous-time CRW to the data
based on the crawl package by Devin Johnson and Josh
London. There are many options to fit the crawl model, and
a detailed tutorial for analysis with crawl is available
here: https://jmlondon.github.io/crawl-workshop/crawl-practical.html.
Let’s try to fit the most basic model using the wrapper function
crawlWrap.
set.seed(1) # crawl often fails, so I recommend always setting a seed
# fit crawl
crw_gps_10 <- crawlWrap(obsData = tracks_gps, timeStep = "10 mins")

## Fitting 3 track(s) using crawl::crwMLE...
## DONE
##
## Predicting locations (and uncertainty) at 10 mins time steps for 3 track(s) using crawl::crwPredict... DONE
Notice how the predicted tracks do not make perfectly straight lines through missing sections (particularly noticeable in T172062). Next, we will extract the predicted locations and add them to the observed data.
# filter predicted locations
tracks_gps_crw <- data.frame(crw_gps_10$crwPredict) %>%
filter(locType == "p") %>%
dplyr::select(mu.x, mu.y, time,
ID) %>%
dplyr::rename(x = "mu.x", y = "mu.y")

Now, let’s try to fit the same HMM as before on this data using the same starting parameters.
set.seed(1)
# prep data
prep_gps_crw <- prepData(tracks_gps_crw, type = "UTM")
# Setting up the starting values
mu0 <- c(50, 500) # Mean step length
sigma0 <- c(50, 500) # Sd of the step length
kappa0 <- c(0.1, 1) # Turning angle concentration parameter (kappa > 0)
# Fit a 2 state HMM
HMM_gps_crw <- fitHMM(prep_gps_crw, nbStates = 2, dist = list(step = "gamma",
angle = "vm"), Par0 = list(step = c(mu0, sigma0), angle = kappa0))
plot(HMM_gps_crw, ask = FALSE)

## Decoding state sequence... DONE
That’s looking much better. It looks like state 1 represents a low-speed, high-tortuosity resident state, while state 2 represents a higher-speed, low-tortuosity travelling state.
In many instances, this model may be sufficient. However, the significant proportion of interpolated locations used to fit the model is likely to affect our results. For example, the large interpolated gaps are still relatively straightened out, with a very consistent speed, and may skew the definition of state 2 in particular. Simply interpreting the results with this issue in mind can be adequate when we have only a few interpolated locations. Below, we will explore more formal ways to address this problem, which can be particularly useful when we have very large numbers of interpolated locations.
One strategy to address large data gaps is to void the data streams
(i.e., step length and turning angle) during moderate/large interpolated
gaps, where we expect that the estimated movement is largely
independent of the observed data before or after the gap. The maximum
size of gap to allow depends on the frequency of the missing data, the
frequency of locations, the study species, and the behaviours of interest. In
this case, I will void the data streams for locations predicted in
gaps \(>60\) min. First, we will
identify which steps need to be voided, then we will prepare the data
and void the estimated step and angle data
streams. We will do this again later in the tutorial, so we will wrap it
into a function called prepData_NAGaps.
# define function to replace step and turn angle of large gaps with NA
NA_gaps <- function(tracks, times){
# rows where location is within a large gap
rows <- which(
rowSums(apply(times, 1, function(X, tracks){
dplyr::between(tracks,
as.POSIXct(X[1], tz = "UTC"),
as.POSIXct(X[2], tz = "UTC"))
}, tracks$time))==1)
tracks$step[rows] <- NA
tracks$angle[rows] <- NA
return(tracks)
}
# define function to identify and nullify gaps
prepData_NAGaps <- function(track_list, tracks_crw, res, max_gap, ...){
# for each ID, identify which rows have gaps >= max_gap
gaps_ls_rows <- lapply(track_list, function(x){
which(difftime(lead(x$time), x$time, units = "mins") >= max_gap)
})
# create sequence of times for each track from gaps >= 60 min
gap_ls <- mapply(FUN = function(track, gaps){
# identify start and end date of each gap
gaps_ls_srt_end <- list(start = ceiling_date(track$time[gaps], paste(res, "min")),
end = floor_date(track$time[gaps+1], paste(res, "min")))
# combine into single vector for each track
data.frame(start = gaps_ls_srt_end$start, end = gaps_ls_srt_end$end)
},
track_list, gaps_ls_rows, SIMPLIFY = F)
# prep data and list by ID
prep_tracks <- prepData(tracks_crw, ...) %>%
{split(., .$ID)}
# Interpolate the location at the times from the sequence
mapply(FUN = NA_gaps, prep_tracks, gap_ls,
SIMPLIFY = F) %>%
do.call("rbind", .) # convert list of tracks back to a single data.frame
}
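Before applying this to the real data, the core voiding step inside NA_gaps can be checked on a toy track (all times and windows below are invented); dplyr::between is replaced with plain comparisons for this sketch:

```r
# Void the step data stream for rows whose time falls in a gap window
gap <- data.frame(start = as.POSIXct("2017-08-01 03:00", tz = "UTC"),
                  end   = as.POSIXct("2017-08-01 05:00", tz = "UTC"))
track <- data.frame(
  time = as.POSIXct("2017-08-01 00:00", tz = "UTC") + (0:6) * 3600,
  step = 1:7)
in_gap <- track$time >= gap$start[1] & track$time <= gap$end[1]
track$step[in_gap] <- NA
track$step  # rows at 03:00-05:00 are voided
```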
prep_tracks_gps_crw_NAgaps <- prepData_NAGaps(tracks_gps_ls, tracks_gps_crw, 10, 60, type = "UTM")

Now, let’s try to fit the same HMM as above to this data with large gaps voided.
set.seed(1)
# Setting up the starting values
mu0 <- c(50, 500) # Mean step length
sigma0 <- c(50, 500) # Sd of the step length
kappa0 <- c(0.1, 1) # Turning angle concentration parameter (kappa > 0)
# Fit a 2 state HMM
HMM_gps_crw_NAgaps <- fitHMM(prep_tracks_gps_crw_NAgaps, nbStates = 2, dist = list(step = "gamma", angle = "vm"), Par0 = list(step = c(mu0, sigma0), angle = kappa0))
plot(HMM_gps_crw_NAgaps, ask = FALSE)

## Decoding state sequence... DONE
Visually, the difference is subtle.
## $step
## state 1 state 2
## mean 258.1724 705.2750
## sd 158.3769 255.9088
##
## $angle
## state 1 state 2
## mean 0.000000 0.00000
## concentration 2.706698 23.60863
## $step
## state 1 state 2
## mean 282.3941 757.6624
## sd 156.3310 248.9599
##
## $angle
## state 1 state 2
## mean 0.000000 0.00000
## concentration 2.476524 19.03249
However, the estimated parameters are quite different depending on whether you account for the large gaps in the data. When you void large gaps, the mean step length for both states is higher, and the turning angle concentration parameter is lower for both states (i.e., more tortuous). The fact that the parameters change for both states suggests that the large gaps skewed the parameterisation of both states.
Another strategy to deal with larger gaps is to segment the tracks with a new individual ID. This may be particularly appropriate for gaps where we may reasonably expect that the underlying states are effectively independent of one another. Specifically, we may ask: over what period of time does the behaviour of the animal affect its subsequent behaviour? In this case, we may expect that the behaviour of a narwhal depends on its behaviour over the preceding several hours, but is independent after 24 hours. We can split the tracks at gaps larger than a predetermined threshold by iterating the ID column. We will not implement this approach in this tutorial; however, it can be done using the following code:
gap_thresh <- 3*60 # in minutes (3 h for illustration)
# allow gaps no larger than gap_thresh per segment
new_ID <- (tracks_gps$dt > gap_thresh | tracks_gps$ID != lag(tracks_gps$ID)) %>% # if dif.time > gap_thresh or new ID
replace_na(TRUE) %>% # replace first NA with ID = 1
cumsum() %>% # iterate ID
paste(tracks_gps$ID, ., sep = "_")
# then smaller gaps <= gap_thresh can be interpolated with crawlWrap

Thus far, we have been using exclusively the GPS data, which can be
assumed to have a negligible error in most applications. However, in
many marine systems, we may obtain ARGOS locations, which can have
highly variable location error on the order of hundreds of meters to
hundreds of km. In addition, ARGOS data often provides location
estimates in very irregular time intervals. crawlWrap can
be used not only to impute missing data, but also to predict
locations when faced with high location error and irregular
time-series.
# filter argos data
tracks_argos <- filter(tracks_proj, loc_class != "GPS") %>%
# calculate time difference
mutate(dt = ifelse(ID == lead(ID),
difftime(lead(time), time, units = "mins"), NA),
# define location class factor levels
loc_class = factor(loc_class,
levels = c(3, 2, 1, 0, "A", "B")))

Note: It is possible to include both GPS and ARGOS data in one model (you would define the levels as: levels = c("GPS", 3, 2, 1, 0, "A", "B")). However, for this example, we will illustrate the more difficult scenario using only ARGOS data and use the GPS data as a measure of model performance. Let’s visualize the ARGOS tracks.
# convert to lines and plot using tmap
tracks_argos %>%
group_by(ID) %>%
summarise(do_union = FALSE) %>%
st_cast("LINESTRING") %>%
tm_shape() +
tm_sf(col = "ID", palette = "Dark2")

Notice how much more noise there is in this data compared to the GPS
tracks. In addition, it looks like there are at least several outlier
points. We can address most of these using a coarse speed filter.
However, before we can calculate speed (\(\frac{\Delta dist}{\Delta t}\)), we must
address the duplicate time-stamps (since we cannot divide by 0 to
calculate speed). It is often not possible to reliably identify which
record is more accurate and which to discard. Therefore, it’s best to retain as
much data as possible and offset duplicate times slightly. We will first offset
duplicate times by adding 10 s for each consecutive row with
dt == 0.
# 1) count the run length of each time step using rle
run_length <- rle(tracks_argos$dt)$lengths
# 2) create a sequence of 1:run_length for each time step
run_sequence <- sequence(run_length)
# 3) discount each time step that's not a duplicate by multiplying by 0
run_sequence <- run_sequence*(tracks_argos$dt==0)
# 4) lag run_sequence by one row and replace the first value by 0,
# so that we iterate each time that is a duplicate of the previous row
tracks_argos$duplicate_count <- replace_na(lag(run_sequence), 0)
# offset duplicate times by 10s
tracks_argos <-
tracks_argos %>%
mutate(time = time + 10*duplicate_count,
# recalculate time difference
dt = ifelse(ID == lead(ID),
difftime(lead(time), time, units = "mins"), NA
))
# last we ensure there are no identical times
sum(tracks_argos$time == lead(tracks_argos$time), na.rm = TRUE)

## [1] 0
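The rle/sequence trick above is easy to misread, so here it is on a toy dt vector (values invented): each run of dt == 0 yields an incrementing count that, once lagged, gives the number of 10 s offsets each duplicated row needs.

```r
# Count consecutive duplicated times from a dt vector (0 = same time as next row)
dt <- c(5, 0, 0, 7, 0, 3)
run_seq <- sequence(rle(dt)$lengths) * (dt == 0)  # position within runs of zeros
dup_count <- c(0, head(run_seq, -1))              # lag by one row; first row gets 0
dup_count  # rows 3 and 4 duplicate earlier times; row 6 duplicates row 5
```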
Now, we can apply a speed filter to remove locations that are faster than some threshold – in this case we will exclude any locations \(>10\) m/s from the previous location.
# distance from previous location
delta_dist <- st_distance(tracks_argos, lag(tracks_argos), by_element=TRUE)
# time from previous location
delta_t <- set_units(lag(tracks_argos$dt), "min")
# calculate speed
tracks_argos$spd <- delta_dist/delta_t
# filter speed
tracks_argos <- tracks_argos %>%
filter(spd < set_units(10, "m/s"))
# convert to lines and plot
tracks_argos %>% group_by(ID) %>%
summarise(do_union = F) %>%
st_cast("LINESTRING") %>%
tm_shape()+
tm_lines(col = "ID", palette = "Dark2")

Already looking much better.
Next, let’s look at the distribution of location accuracy classes (ignoring duplicate locations).
# identify proportion of each location class
tracks_argos %>%
filter(dt > 0) %>%
{table(.$loc_class)/nrow(tracks_argos)}

##
## 3 2 1 0 A B
## 0.08490566 0.16603774 0.21698113 0.14905660 0.14528302 0.23396226
Now we have to deal with location error, irregular times, and missing
locations. The crawl model that we fit earlier can be
extended to predict tracks from irregular data with location error. In
this section, we will cover using crawlWrap to impute
regular locations when faced with more difficult data (e.g., ARGOS) and
fitting an HMM to the multiply imputed data.
First, let’s identify a resolution to work at if we had only obtained ARGOS data.
# visualise time differences
hist(tracks_argos$dt[tracks_argos$dt<180], 32,
main = NA, xlab = "Time difference (min)")

# summarise track dt
tracks_argos %>%
st_drop_geometry() %>%
group_by(ID) %>%
summarise(p_NA_10m = p_na(time, n_loc, 10), # 10 min
p_NA_20m = p_na(time, n_loc, 20), # 20 min
p_NA_30m = p_na(time, n_loc, 30), # 30 min
p_NA_45m = p_na(time, n_loc, 45), # 45 min
p_NA_60m = p_na(time, n_loc, 60)) # 60 min

## # A tibble: 3 × 6
## ID p_NA_10m p_NA_20m p_NA_30m p_NA_45m p_NA_60m
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 T172062 0.789 0.651 0.541 0.290 0.327
## 2 T172064 0.843 0.722 0.632 0.415 0.449
## 3 T172066 0.859 0.751 0.675 0.512 0.513
# p_NA_90m = p_na(time, n_loc, 90), # 90 min
# p_NA_120m = p_na(time, n_loc, 120)) # 120 min

In this section, we will use 45 min as a trade-off between higher resolution and minimising data gaps.
First, we will split the ARGOS tracks into a list by ID, and identify the times in which to predict locations.
# split ARGOS tracks into list by ID
tracks_argos_ls <- tracks_argos %>%
mutate(x = st_coordinates(tracks_argos)[,"X"],
y = st_coordinates(tracks_argos)[,"Y"]) %>%
st_drop_geometry() %>%
split(.,.$ID)
# define times to predict for easier comparison between ARGOS and GPS tracks
predTime <- lapply(tracks_argos_ls, function(x)
seq(first(x$time), last(x$time), "45 min"))

Next, we will define the error model for crawl, which
describes the longitudinal and latitudinal location error associated
with the location quality classes.
# define error model
err.model <- list(x = ~ loc_class - 1, y = ~ loc_class - 1)

The crawl tutorial suggests that users define prior
distributions to optimize the fit and increase the model’s ability to
converge with limited/challenging data while providing the model
flexibility. Here, we provide a normal distribution of the
log-transformed error for each of the ARGOS error classes. The tutorial
also suggests that users rely on a prior distribution for the beta parameter
centered on -4 (smoother fit) and, if needed, fix the beta parameter to
-4 (see the crawl tutorial linked above).
# define priors
prior <- function(p) {
dnorm(p[1], log(250), 0.2, log = TRUE) + # 3
dnorm(p[2], log(500), 0.2, log = TRUE) + # 2
dnorm(p[3], log(1500), 0.2, log = TRUE) + # 1
dnorm(p[4], log(2500), 0.4, log = TRUE) + # 0
dnorm(p[5], log(2500), 0.4, log = TRUE) + # A
dnorm(p[6], log(2500), 0.4, log = TRUE) + # B
# skip p[7] as we won't provide a prior for sigma
dnorm(p[8], -4, 3, log = TRUE) # beta parameter
}

Now, let’s fit the crawl model to the ARGOS tracks.
# fit crawl model and predict locations at 45 min time steps
set.seed(1)
crwOut_argos_45 <- crawlWrap(tracks_argos,
timeStep = "45 min",
retryFits = 1, Time.name = "time",
err.model = err.model,
prior = prior,
predTime = predTime)

## Fitting 3 track(s) using crawl::crwMLE...
## DONE
##
##
## Predicting locations (and uncertainty) at 45 mins time steps for 3 track(s) using crawl::crwPredict... DONE
# check that each track converged (0 = converged)
sapply(crwOut_argos_45$crwFits, function(x)x$convergence)

## T172062 T172064 T172066
## 0 0 0
plot(crwOut_argos_45, ask = FALSE)

# convert to data.frame and extract predicted locations
crw_argos_45 <- data.frame(crwOut_argos_45$crwPredict) %>%
filter(locType == "p") %>%
dplyr::select(mu.x, mu.y, time, ID, loc_class) %>%
dplyr::rename(x = "mu.x", y = "mu.y")

For comparison, we will also use crawlWrap to
interpolate the GPS data at a 45 min frequency.
# fit crawl model and predict locations at 45 min time steps for GPS data
set.seed(1)
crwOut_gps_45 <- crawlWrap(obsData = tracks_gps, timeStep = "45 mins",
predTime = predTime)

## Fitting 3 track(s) using crawl::crwMLE...
## DONE
##
## Predicting locations (and uncertainty) at 45 mins time steps for 3 track(s) using crawl::crwPredict... DONE
# check that each track converged (0 = converged)
sapply(crwOut_gps_45$crwFits, function(x)x$convergence)

## T172062 T172064 T172066
## 0 0 0
# convert to data.frame and extract predicted locations
crw_gps_45 <- data.frame(crwOut_gps_45$crwPredict) %>%
filter(locType == "p") %>%
dplyr::select(mu.x, mu.y, time, ID) %>%
dplyr::rename(x = "mu.x", y = "mu.y")
# compare estimated tracks
filter(crw_argos_45, ID=="T172062") %>%
plot(y~x, data=., type = 'l', col = 'red', asp = 1)
filter(crw_gps_45, ID=="T172062") %>%
points(y~x, data=., type = 'l')

We see that there is some overlap between the ARGOS (red) and GPS (black) tracks at a 45 min resolution. However, there are some sections where the ARGOS track cuts straight lines through sections where there is variability in the GPS track.
Let’s try to fit an HMM to the interpolated ARGOS tracks.
# prep tracks
prep_argos_45 <- prepData(crw_argos_45, type = "UTM")
prep_gps_45 <- prepData(crw_gps_45, type = "UTM")
# setup model
dist <- list(step = "gamma", angle = "vm")
# Setting up the starting values
mu0 <- c(1000, 3000) # mu step
sigma0 <- mu0 # Sd step
kappa0 <- c(1, 3) # kappa angle
Par0 <- list(step = c(mu0, sigma0), angle = kappa0)
# Fit a 2 state HMM for 45 min ARGOS and GPS tracks
set.seed(1)
HMM_argos_45_basic <- fitHMM(prep_argos_45, nbStates = 2, dist = dist, Par0 = Par0)

## Warning in fitHMM.momentuHMMData(prep_argos_45, nbStates = 2, dist = dist, : ginv of the hessian failed -- Error in svd(X): infinite or missing values in 'x'
HMM_gps_45_basic <- fitHMM(prep_gps_45, nbState = 2, dist = dist, Par0 = Par0)
plot(HMM_argos_45_basic, animals = "T172062", ask = F)

## Decoding state sequence... DONE

plot(HMM_gps_45_basic, animals = "T172062", ask = F)

## Decoding state sequence... DONE
# compare predicted states
sum(viterbi(HMM_argos_45_basic) == viterbi(HMM_gps_45_basic))/nrow(prep_argos_45)

## [1] 0.5168
The interpolated GPS track seems to fit; however, the model had some
issues fitting the ARGOS data. The ARGOS model is clearly defining all
the interpolated segments as state 1 and the observed locations as state
2. Further, only about half (\(52\%\)) of the
predicted states from the ARGOS model are the same as in the GPS model. I
suspect the issue may be due to the large gaps, where the interpolated
track is basically linear. Indeed, the GPS HMM seems to
primarily be classifying the straight interpolated sections as state 1
and the rest as state 2. We can try to address this by voiding large
gaps using the prepData_NAGaps function we defined earlier.
In this case, we will void gaps \(>180\) min.
# prep 45 min ARGOS data and void large gaps
prep_argos_45 <- prepData_NAGaps(track_list = tracks_argos_ls, tracks_crw = crw_argos_45, res = 45, max_gap = 180, type = "UTM")
# prep 45 min gps data and void large gaps
prep_gps_45 <- prepData_NAGaps(track_list = tracks_gps_ls, tracks_crw = crw_gps_45, res = 45, max_gap = 180, type = "UTM")

Now, let’s try to fit the same HMM as above to this data with large gaps voided.
set.seed(1)
# Fit a 2 state HMM for 45 min ARGOS and GPS tracks
HMM_argos_45 <- fitHMM(prep_argos_45, nbStates = 2, dist = dist, Par0 = Par0)
HMM_gps_45 <- fitHMM(prep_gps_45, nbStates = 2, dist = dist, Par0 = Par0)
plot(HMM_argos_45, animals = "T172062", ask = FALSE)

## Decoding state sequence... DONE

plot(HMM_gps_45, animals = "T172062", ask = FALSE)

## Decoding state sequence... DONE
# compare new and original GPS model predicted states
sum(viterbi(HMM_gps_45_basic) == viterbi(HMM_gps_45))/nrow(prep_argos_45)

## [1] 0.936
# compare ARGOS and GPS model predicted states
sum(viterbi(HMM_argos_45) == viterbi(HMM_gps_45))/nrow(prep_argos_45)

## [1] 0.7376
Both data sets now seem to fit easily. There isn’t a large change between this GPS model and the previous one (without voiding large gaps); \(94\%\) of the predicted states are the same. However, the overlap between the ARGOS and GPS models (\(74\%\)) is much higher than previously, suggesting the ARGOS model with voided gaps is substantially improved. Still, there remains a notable discrepancy between the ARGOS and the GPS HMMs.
—– WORK IN PROGRESS —— First, fit a multiple-imputation HMM to the GPS data to obtain starting parameters.
# fit crawl to 1 GPS track and predict locations at 45 min time steps
cwr_gps_sub <- tracks_gps %>%
filter(ID == "T172062")
predTime <- lapply(split(cwr_gps_sub,cwr_gps_sub$ID), function(x)
seq(first(x$time), last(x$time), "45 min"))
# fit crawl
set.seed(1)
crwOut_gps_T172062 <-
crawlWrap(cwr_gps_sub,
retryFits = 1, Time.name = "time",
predTime = predTime)

## Fitting 1 track(s) using crawl::crwMLE...
## DONE
##
##
## Predicting locations (and uncertainty) at 45 mins time steps for 1 track(s) using crawl::crwPredict... DONE
crwOut_gps_T172062$crwFits$T172062

##
##
## Continuous-Time Correlated Random Walk fit
##
## Models:
## --------
## Movement ~ 1
## Error
##
##
## Parameter Est. St. Err. 95% Lower 95% Upper
## ln sigma (Intercept) 8.581 0.093 8.399 8.763
## ln beta (Intercept) -0.248 0.101 -0.447 -0.050
##
##
## Log Likelihood = -6491.78
## AIC = 12987.56
# define starting parameters
mu0 <- c(1000, 3000) # mu step
sigma0 <- mu0 # sd step
kappa0 <- c(1, 3) # kappa angle
Par0 <- list(step = c(mu0, sigma0), angle = kappa0)
# fit multiple imputation HMM to subset to GPS tracks
nfsFits_gps <- MIfitHMM(crwOut_gps_T172062, nSims = 10,
nbStates = 2, dist = dist, Par0 = Par0, fit = T)

## Drawing 10 realizations from the position process using crawl...
## DONE
## DONE
## Fitting 10 realizations of the position process using fitHMM...
## =======================================================================
## Fitting HMM with 2 states and 2 data streams
## -----------------------------------------------------------------------
## step ~ gamma(mean=~1, sd=~1)
## angle ~ vm(concentration=~1)
##
## Transition probability matrix formula: ~1
##
## Initial distribution formula: ~1
## =======================================================================
## DONE
## Warning in MIpool(fits, alpha = alpha, ncores = ncores, na.rm = na.rm): Hessian
## is singular for HMM fit(s): 5, 6, 10
## Warning in sqrt(diag(x)): NaNs produced
## Warning in sqrt(diag(x)): NaNs produced
## Warning in MIpool(fits, alpha = alpha, ncores = ncores, na.rm = na.rm): working
## parameter standard errors are not finite for HMM fits 3, 8
## Decoding state sequences for each imputation...
## Decoding state probabilities for each imputation...
## Warning in sqrt(diag(x)): NaNs produced
## Warning in sqrt(diag(x)): NaNs produced
## Calculating location 95% error ellipses... DONE
plot(nfsFits_gps, ask = F)
# subset
sub_argos <- tracks_argos %>%
filter(ID == "T172062") %>%
list()
# recalculate predTime
predTime <- lapply(sub_argos, function(x)
seq(first(x$time), last(x$time), "45 min"))
crawl works best when there is low location error
(e.g., GPS data), when the animal's movement is substantially greater than
the location error, or when there is a large amount of data. When these
criteria are not met, we can help the optimisation by providing priors as follows:
# define priors
prior <- function(p) {
dnorm(p[1], log(250), 0.2, log = TRUE) + # 3
dnorm(p[2], log(500), 0.2, log = TRUE) + # 2
dnorm(p[3], log(1500), 0.2, log = TRUE) + # 1
dnorm(p[4], log(2500), 0.4, log = TRUE) + # 0
dnorm(p[5], log(2500), 0.4, log = TRUE) + # A
dnorm(p[6], log(2500), 0.4, log = TRUE) + # B
# skip p[7] as we won't provide a prior for sigma
dnorm(p[8], -4, 0.1, log = TRUE) # beta parameter
}
However, in our case, the location error is often larger than a narwhal's movement over a 45 min period, so we will instead fix the location error estimates, provide a prior for beta (a measure of autocorrelation in narwhal movement), and allow the model to estimate sigma without any priors.
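A log-prior like this can be sanity-checked by evaluating it directly: values of ln beta near the prior mean of \(-4\) receive a much higher log-density than values further away. A standalone base-R sketch of the same form of prior:

```r
# tight normal log-prior on ln beta, centred at -4 with sd 0.1
log_prior_beta <- function(p) dnorm(p, mean = -4, sd = 0.1, log = TRUE)

# the log-density peaks at the prior mean and falls off steeply
log_prior_beta(-4) > log_prior_beta(-3.5) # TRUE
log_prior_beta(-4) > log_prior_beta(-4.5) # TRUE
```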
# define priors
prior <- function(p) {
# skip p[1] as we won't provide a prior for sigma
dnorm(p[2], -4, 0.1, log = TRUE) # beta parameter
}
Next, we calculate the known error between the ARGOS and GPS locations to obtain fixed parameters (fixPar) for the error model.
# combine GPS and ARGOS narwhal locations
data <- data.frame(ID = prep_gps_45$ID,
time = prep_gps_45$time,
x_gps = prep_gps_45$x, y_gps = prep_gps_45$y,
x_argos = prep_argos_45$x, y_argos = prep_argos_45$y) %>%
# calculate difference between GPS and ARGOS locations
mutate(#d_loc = sqrt((x_argos-x_gps)^2+(y_argos-y_gps)^2),
dx = abs(x_argos-x_gps), dy = abs(y_argos-y_gps)) %>%
setDT() # convert to data.table
# Select ARGOS columns to merge and convert track to data.table
tracks_argos_dt <- tracks_argos %>%
dplyr::select(ID, time, loc_class) %>%
setDT()
# set keys to join
setkey(data, ID, time)
setkey(tracks_argos_dt, ID, time)
# rolling join
data.df <- as.data.frame(tracks_argos_dt[data, roll = TRUE])
# void large gaps so simulated locations do not skew model error
# data.df$d_loc[is.na(prep_argos_45$step)] <- NA
data.df$dx[is.na(prep_argos_45$step)] <- NA
data.df$dy[is.na(prep_argos_45$step)] <- NA
# calculate mean x and y error for each location class using summarise
data.df %>%
group_by(loc_class) %>% # group by location class
summarise(#mean_d_loc = mean(d_loc, na.rm = T),
#sd_d_loc = sd(d_loc, na.rm = T),
mean_dx = mean(dx, na.rm = T),
mean_dy = mean(dy, na.rm = T))
## # A tibble: 6 × 3
## loc_class mean_dx mean_dy
## <fct> <dbl> <dbl>
## 1 3 767. 717.
## 2 2 969. 1343.
## 3 1 1200. 1150.
## 4 0 1831. 1666.
## 5 A 1811. 1448.
## 6 B 1972. 1882.
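These mean errors feed directly into crawl's fixPar vector. Rather than typing the values by hand, the vector could be built programmatically from the summary; a sketch assuming a data frame with the column names produced by the summarise() above:

```r
# hypothetical mean x/y errors (m) by location class, as summarised above
err <- data.frame(loc_class = c("3", "2", "1", "0", "A", "B"),
                  mean_dx = c(767, 969, 1200, 1831, 1811, 1972),
                  mean_dy = c(717, 1343, 1150, 1666, 1448, 1882))

# log-scale fixed error parameters: x classes, then y classes, then NA
# for sigma and beta so that crawl estimates them
fixPar <- c(log(err$mean_dx), log(err$mean_dy), NA, NA)
length(fixPar) # 14
```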
# define fixPar using the mean errors above
fixPar <- c(log(767), log(969), log(1200), # x 3,2,1
log(1831),log(1811),log(1972), # x 0,A,B
log(717), log(1343), log(1150),# y 3,2,1
log(1666),log(1448),log(1882), # y 0,A,B
NA, NA) # sigma, beta (to be estimated)
Now we fit the crawl model to the first narwhal:
# fit
set.seed(1)
argos_45_crwOut_1trk_fx <-
crawlWrap(tracks_argos_ls[[1]],
retryFits = 100, retrySD = 5,
Time.name = "time",
err.model = err.model,
prior = prior,
fixPar = fixPar,
predTime = predTime[[1]])
## Fitting 1 track(s) using crawl::crwMLE...
## DONE
##
##
## Predicting locations (and uncertainty) at 45 mins time steps for 1 track(s) using crawl::crwPredict... DONE
# view
argos_45_crwOut_1trk_fx$crwFits$T172062
##
##
## Continuous-Time Correlated Random Walk fit
##
## Models:
## --------
## Movement ~ 1
## Error ~loc_class - 1 ~loc_class - 1
##
##
## Parameter Est. St. Err. 95% Lower 95% Upper
## ln tau.x loc_class3 6.642 . . .
## ln tau.x loc_class2 6.876 . . .
## ln tau.x loc_class1 7.090 . . .
## ln tau.x loc_class0 7.513 . . .
## ln tau.x loc_classA 7.502 . . .
## ln tau.x loc_classB 7.587 . . .
## ln tau.y loc_class3 6.575 . . .
## ln tau.y loc_class2 7.203 . . .
## ln tau.y loc_class1 7.048 . . .
## ln tau.y loc_class0 7.418 . . .
## ln tau.y loc_classA 7.278 . . .
## ln tau.y loc_classB 7.540 . . .
## ln sigma (Intercept) 11.788 0.136 11.521 12.054
## ln beta (Intercept) -3.962 0.101 -4.161 -3.764
##
##
## Log Likelihood = -3843.848
## AIC = 7691.695
plot(argos_45_crwOut_1trk_fx)
Next, we fit the multiple imputation HMM:
# nfsFits <- MIfitHMM(argos_45_crwOut_1trk, nSims = 1, fit = F)
set.seed(1)
MIfit_argos <- MIfitHMM(argos_45_crwOut_1trk_fx, nSims = 100, fit = T,
nbStates = 2, dist = dist, ncores = 10,
Par0 = getPar0(nfsFits_gps)$Par)
## Drawing 100 realizations from the position process using crawl...
## Fitting 100 realizations of the position process using fitHMM...
## Fitting 100 imputations in parallel...
## Warning in MIpool(fits, alpha = alpha, ncores = ncores, na.rm = na.rm): Hessian
## is singular for HMM fit(s): 3, 23, 42, 66, 78, 90, 96
## Warning in sqrt(diag(x)): NaNs produced
## Warning in MIpool(fits, alpha = alpha, ncores = ncores, na.rm = na.rm): working
## parameter standard errors are not finite for HMM fits 2, 7, 9, 14, 17, 19, 20,
## 25, 28, 41, 43, 46, 56, 57, 60, 62, 67, 69, 71, 73, 76, 77, 81, 88, 93, 94, 100
## Decoding state sequences for each imputation...
## Decoding state probabilities for each imputation...
## Warning in sqrt(diag(x)): NaNs produced
## Calculating location 95% error ellipses... DONE
#pool results
MIpool_argos <- MIpool(MIfit_argos[[2]])
## Warning in MIpool(MIfit_argos[[2]]): Hessian is singular for HMM fit(s): 3, 23,
## 42, 66, 78, 90, 96
## Warning in MIpool(MIfit_argos[[2]]): NaNs produced
## Warning in MIpool(MIfit_argos[[2]]): working parameter standard errors are not
## finite for HMM fits 2, 7, 9, 14, 17, 19, 20, 25, 28, 41, 43, 46, 56, 57, 60, 62,
## 67, 69, 71, 73, 76, 77, 81, 88, 93, 94, 100
## Decoding state sequences for each imputation...
## Decoding state probabilities for each imputation...
## Warning in sqrt(diag(x)): NaNs produced
## Calculating location 95% error ellipses... DONE
# plot pooled data
plot(MIpool_argos, ask = F)
sum(MIpool_argos$Par$states == viterbi(HMM_gps_45)[which(HMM_gps_45$data$ID=="T172062")])/nrow(sub_argos[[1]])
## [1] 0.7040359
There is a \(70\%\) overlap in the predicted states between the pooled ARGOS model and the GPS model.
## Integrating dive data
Let’s import the dive data and explore it.
dives <- read.csv("data/dives.csv") %>%
mutate(time = ymd_hms(time)) %>%
filter(month(time) == 8, day(time) > 7, day(time) <= 14)
head(dives)
## ID time depth dt
## 1 T172062 2017-08-08 00:00:00 1.0 1.25
## 2 T172062 2017-08-08 00:01:15 7.5 1.25
## 3 T172062 2017-08-08 00:02:30 11.0 1.25
## 4 T172062 2017-08-08 00:03:45 5.5 1.25
## 5 T172062 2017-08-08 00:05:00 3.5 1.25
## 6 T172062 2017-08-08 00:06:15 1.0 1.25
##
## 1.25 61.25 121.25 181.25 241.25 301.25 361.25
## 18547 52 16 6 1 1 1
It appears that there are relatively few gaps; however, the gaps that do exist are relatively long (> 60 min). We will therefore regularise the dive data and void large gaps.
# regularise data
dives <- dives %>%
group_by(ID) %>%
# regularise time series by 1.25 min
summarise(time = seq(first(time), last(time), by = 1.25*60)) %>%
# merge regularised time with original dive
left_join(dives, by = c("ID", "time"))
## `summarise()` has grouped output by 'ID'. You can override using the `.groups`
## argument.
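This regularisation pads each whale's record onto an exact 1.25 min grid, leaving NA depths in the gaps. A toy base-R sketch of the same idea (made-up depth record, not the narwhal data):

```r
# toy depth record with one missing sample at 00:02:30
obs <- data.frame(
  time  = as.POSIXct(c("2017-08-08 00:00:00", "2017-08-08 00:01:15",
                       "2017-08-08 00:03:45"), tz = "UTC"),
  depth = c(1.0, 7.5, 5.5))

# full 1.25 min grid from first to last observation
grid <- data.frame(time = seq(min(obs$time), max(obs$time), by = 1.25 * 60))

# left join: gaps appear as NA depths
reg <- merge(grid, obs, by = "time", all.x = TRUE)
reg$depth # 1.0 7.5 NA 5.5
```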
Next, we summarise the time series into concrete data streams that can
be modelled in the HMM. We can use the diveMove package to
identify individual dives and calculate dive statistics. We must first
convert the depth data to the TDR class for each
whale. As we are working with multiple whales, we will use
lapply to apply createTDR to each whale;
we will also use lapply for most of the diveMove
functions.
dives_ls <- split(dives, dives$ID)
dive_TDR <- lapply(dives_ls, function(data){
createTDR(time = data$time, depth = data$depth, dtime = 1.25*60, file = "data/dives.csv")
})
# generate interactive plot
plotTDR(dive_TDR[[1]]) # note: try zooming in to part of the dive
Next, we must use calibrateDepth to calibrate the
depth data. This function identifies wet and dry periods (for animals
that haul out), applies a zero-offset correction (ZOC), and identifies
all dives in the record and their phases. ZOC can be done using either
the offset or filter method. In this case, we
will assume the depth data is accurate and does not require an offset.
We should also specify dive.thr, which represents the
threshold depth below which an underwater phase should be considered a
dive – in this case, we will set it at 8 m. There are many other
optional parameters to identify dives that we will not get into in this
tutorial (See vignette("diveMove") for details).
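Conceptually, dive.thr turns the depth trace into candidate dives: contiguous runs of samples deeper than the threshold. A toy base-R illustration of that idea using rle() (this is not diveMove's actual algorithm, which also handles phases and wet/dry periods):

```r
# toy depth trace (m) sampled at regular intervals
depth <- c(1, 3, 12, 40, 55, 30, 9, 2, 1, 15, 22, 6, 1)

# runs of consecutive samples deeper than the 8 m dive threshold
runs <- rle(depth > 8)
n_dives <- sum(runs$values) # number of candidate dives
n_dives # 2
```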
# calibrate TDR and identify dive phases
dive_TDR_calib <- lapply(X = dive_TDR, FUN = calibrateDepth, zoc.method = "offset", offset = 0, dive.thr = 8)
## Record is truncated at the beginning and at the end
## 39 phases detected
## 818 dives detected
## Record is truncated at the beginning and at the end
## 75 phases detected
## 484 dives detected
## Record is truncated at the beginning and at the end
## 41 phases detected
## 628 dives detected
In the interactive plot, try zooming into one of the dives and hovering the mouse over the plot to preview the phases identified by calibrateDepth. The phases correspond to descent (D), descent/bottom (DB), bottom (B), bottom/ascent (BA), ascent (A), and surface (X). We can plot a specific dive as follows:
Now, we can calculate summary statistics for each dive using the
function diveStats. Many dive metrics are
estimated, and which ones to retain is species, data, and question
specific. In this case, we will retain five of those calculated by
diveStats: dive time, bottom time, maximum depth, bottom
distance (a measure of "wiggling" while at the bottom), and post-dive
duration.
# calculate dive stats and add dive.id to each dive
dive_sum <- lapply(dive_TDR_calib, function(data){
mutate(diveStats(data, depth.deriv = FALSE), dive.id = row_number()) %>%
dplyr::select(dive.id, divetim, botttim, maxdep, bottdist, postdive.dur)}) # select variables of interest
# add dive.id with depth data to each depth record
dives_ls <- mapply(function(TDR, dives){
mutate(TDR, dive.id = dives@dive.activity$dive.id)
}, TDR = dives_ls, dives = dive_TDR_calib, SIMPLIFY = F)
# join TDR data with dive summary data
dives_ls <- mapply(function(TDR, dive_sum){
left_join(TDR, dive_sum, by = "dive.id")
}, TDR = dives_ls, dive_sum = dive_sum, SIMPLIFY = F)
# convert dive_ls back to a data.frame
dives_df <- do.call(rbind, dives_ls)
Next, we will replace any missing bottom time for a valid dive (dive.id > 0) with 75 s, since at least some time must be spent at the bottom. Then, we will calculate one additional metric as a proxy for dive shape: the ratio of bottom time to dive time. Dives where \(\leq20\%\) of the time is spent at the bottom are V-shaped, dives where \(>20\%\) and \(\leq50\%\) of the time is spent at the bottom are U-shaped, and dives where \(>50\%\) of the time is spent at the bottom are square.
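That classification rule can be sketched with cut() on the bottom-time ratio. The propbott values below are hypothetical and the shape labels are ours:

```r
# classify dive shape from the proportion of dive time spent at the bottom
propbott <- c(0.10, 0.35, 0.80, 0.20, 0.55)

shape <- cut(propbott,
             breaks = c(0, 0.2, 0.5, 1),     # V: <=0.2, U: (0.2, 0.5], square: >0.5
             labels = c("V", "U", "square"),
             include.lowest = TRUE)
as.character(shape) # "V" "U" "square" "V" "square"
```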
# replace NA bottom time of valid dives
dives_df$botttim[dives_df$dive.id > 0 & is.na(dives_df$botttim)] <- 75
# calculate proportion time at bottom
dives_df <- dives_df %>%
mutate(propbott = botttim/divetim)
# remove "dives" with no duration
dives_df <- dives_df %>%
filter(!(dive.id > 0 & is.na(divetim)))
The next issue is that the dive data are at a different temporal resolution (75 s) than the location data (10 min). There are two options for including both data streams in the same HMM. The first is to implement a hierarchical HMM; however, this is more complicated and will be covered in tomorrow's tutorial. The second, which we will use here, is to summarise the depth/dive data to a 10 min resolution and include them as additional data streams alongside step length and turning angle.
dives_sum <- dives_df %>%
# round time to same interval as location data
mutate(time = floor_date(time, "10 min")) %>%
# group rows by time interval
group_by(ID, time) %>%
# summarise data
summarise(NA_t = sum(is.na(depth))*1.25,
surf_t = sum(dive.id == 0)*1.25,
mean_dep = ifelse(NA_t == 10, NA, mean(na.rm = T, depth)),
max_dep = ifelse(NA_t == 10, NA, max(na.rm = T, depth)),
dive_t = ifelse(NA_t == 10, NA, sum(na.rm = T, divetim)),
bott_t = ifelse(dive_t < 5, NA, sum(na.rm = T, botttim)),
prop_bott = ifelse(dive_t < 5, NA, max(na.rm = T, propbott)),
max_dive_dep = ifelse(dive_t < 5, NA, max(na.rm = T, maxdep)),
bott_dist = ifelse(dive_t < 5, NA, max(na.rm = T, bottdist)),
post_dive_dur = ifelse(dive_t < 5, NA, max(na.rm = T, postdive.dur))) %>%
# remove -Inf values (typically error)
filter(!is.infinite(dive_t) & !is.infinite(bott_dist))
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## Warning in max(na.rm = T, bottdist): no non-missing arguments to max; returning
## -Inf
## `summarise()` has grouped output by 'ID'. You can override using the `.groups`
## argument.
## # A tibble: 6 × 12
## # Groups: ID [1]
## ID time NA_t surf_t mean_dep max_dep dive_t bott_t
## <chr> <dttm> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 T172062 2017-08-08 00:10:00 0 3.75 17.3 31 1875 375
## 2 T172062 2017-08-08 00:20:00 0 6.25 7.56 20 1875 1125
## 3 T172062 2017-08-08 00:30:00 0 0 109. 123 7200 4200
## 4 T172062 2017-08-08 00:40:00 0 5 20.9 88 2100 1200
## 5 T172062 2017-08-08 00:50:00 0 5 9.94 24.5 1275 450
## 6 T172062 2017-08-08 01:00:00 0 1.25 53.2 76.5 4200 1575
## # … with 4 more variables: prop_bott <dbl>, max_dive_dep <dbl>,
## # bott_dist <dbl>, post_dive_dur <dbl>
Note that we used two different ifelse statements. First,
ifelse(NA_t == 10, ...) ensured that we only calculate mean
and maximum depth for periods where we have at least 1.25 min of depth
data. Second, ifelse(surf_t > 5, ...) ensured that we
only calculate dive metrics when at least half of the 10 min interval is
spent in a dive; otherwise the animal is assumed to be at the surface
and its dive metrics are NA.
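To make the two guards concrete, here is a minimal sketch on toy data (hypothetical column values; the real calculation operates on the 10 min interval summaries above):

```r
# toy interval summaries: NA_t = minutes of missing depth data in the
# 10 min interval, surf_t = minutes at the surface, depth = mean depth (m)
toy <- data.frame(NA_t   = c(0, 10, 0),
                  surf_t = c(2,  0, 8),
                  depth  = c(50, NA, 12))

# guard 1: no depth summaries when the whole interval is missing
toy$mean_dep <- ifelse(toy$NA_t == 10, NA, toy$depth)
# guard 2: mostly-surface intervals are treated as non-dives
toy$dive_dep <- ifelse(toy$surf_t > 5, NA, toy$mean_dep)

toy$dive_dep  # 50 for the dive; NA for the missing and surface intervals
```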
Any of the eight variables can be used as data streams in the HMM; however, including too many would significantly increase the number of parameters to estimate, and consequently the computation time. Therefore, we must select which variables to use, for which there are several approaches:
1. Variables can be selected using expert knowledge of the species' behaviour and the research question.
2. We preferably want to avoid variables with a lot of missing data.
3. The data streams should exhibit variation and evidence of clustering or multi-modality that may be tied to the underlying behaviours.
4. Data streams with no variation or, conversely, that are very noisy contain little information on the underlying behaviour.
5. We want to avoid variables that are overdispersed, which would be difficult to describe using a statistical distribution.
6. Given that they are tied to autocorrelated behaviours, they too should exhibit some – but not too much – autocorrelation.
7. One of the HMM's key assumptions is that the data streams are independent of each other (conditional on the states), so we should avoid selecting highly collinear data streams.
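As a quick illustration of the missing-data, noise, and autocorrelation criteria, here is a toy screening function (base R, simulated streams — not the narwhal data):

```r
set.seed(1)
# 'smooth' is a moving-average series (autocorrelated, a few leading NAs);
# 'noise' is white noise with randomly missing values
smooth <- as.numeric(stats::filter(rnorm(200), rep(1/5, 5), sides = 1))
noise  <- rnorm(200)
noise[sample(200, 20)] <- NA

screen_stream <- function(x) {
  c(n_missing = sum(is.na(x)),
    variance  = var(x, na.rm = TRUE),
    acf_lag1  = acf(x, na.action = na.pass, plot = FALSE)$acf[2])
}
round(rbind(smooth = screen_stream(smooth),
            noise  = screen_stream(noise)), 2)
# the autocorrelated stream shows a high lag-1 ACF; the white noise does not
```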
Assuming all the variables are biologically relevant, let’s look at
structure in the data. I will also add a subjective score based on the
results (first number after # symbol).
vars <- c("mean_dep","max_dep","dive_t","bott_t","prop_bott",
"max_dive_dep","bott_dist","post_dive_dur")
# First, missing data
dives_sum %>%
summarise(mean_dep = sum(is.na(mean_dep)), # 2
max_dep = sum(is.na(max_dep)), # 2
dive_t = sum(is.na(dive_t)), # 2
bott_t = sum(is.na(bott_t)), # 1
prop_bott = sum(is.na(prop_bott)), # 1
max_dive_dep = sum(is.na(max_dive_dep)), # 1
bott_dist = sum(is.na(bott_dist)), # 1
post_dive_dur = sum(is.na(post_dive_dur))) # 1
## # A tibble: 3 × 9
## ID mean_dep max_dep dive_t bott_t prop_bott max_dive_dep bott_dist
## <chr> <int> <int> <int> <int> <int> <int> <int>
## 1 T172062 120 120 120 216 216 216 216
## 2 T172064 366 366 366 476 476 476 476
## 3 T172066 210 210 210 359 359 359 359
## # … with 1 more variable: post_dive_dur <int>
# Second, evidence of multi-modality, balanced variation, and no over-dispersion
# as the dive variables are zero-bound, applying a log transformation makes it easier to see structure in the distribution
# dives_sum %>%
# ungroup() %>%
# dplyr::select(vars) %>%
# apply(2, hist, 100)
hist(log(dives_sum$mean_dep), 100) # 2 good structure, high dispersion
## Warning in logit(dives_sum$prop_bott): proportions remapped to (0.025, 0.975)
# Third, balanced autocorrelation
dives_sum_filter <- filter(dives_sum, ID == "T172062")
acf(dives_sum_filter$mean_dep, na.action = na.pass) # 1 ACF~8 some variability
# subjective score
data.frame(var = c("mean_dep", "max_dep", "dive_t", "bott_t", "prop_bott",
"max_dive_dep", "bott_dist", "post_dive_dur"),
score = c(5, 6, 5, 1,
1, 4, 4, 1))## var score
## 1 mean_dep 5
## 2 max_dep 6
## 3 dive_t 5
## 4 bott_t 1
## 5 prop_bott 1
## 6 max_dive_dep 4
## 7 bott_dist 4
## 8 post_dive_dur 1
From my subjective interpretation of these outputs, I think the five most promising variables to include are maximum depth, dive time, mean depth, maximum dive depth, and bottom distance. Next, let's merge the dive data streams with the location data.
tracks_dives <- left_join(prep_tracks_gps_crw_NAgaps,
dives_sum[,c("ID", "time", "max_dep", "dive_t",
"mean_dep", "max_dive_dep", "bott_dist")],
by = c("ID", "time"))
Now, we must check for collinearity between the data streams to select 1–3 to use in the HMM. We can check collinearity using Pearson's correlation matrix.
# calculate and plot Pearson's correlation matrix
tracks_dives[,c("max_dep", "dive_t", "mean_dep", "max_dive_dep",
"bott_dist", "step", "angle")] %>%
na.omit() %>%
cor() %>%
corrplot(method="number")
step and angle seem quite independent from
all other variables, as does bott_dist. Unfortunately, there is
quite high correlation between the first four dive data streams, so we
will have to select which to use. dive_t and
mean_dep have the lowest correlation with
bott_dist and step; however,
max_dep had the highest subjective score. For the purposes
of this tutorial, I will use max_dep and
bott_dist.
To get starting parameters, we can fit an HMM to each one independently. We will use the gamma distribution for both and for now will use the same starting parameters.
# identify whether there is 0 data
tracks_dives %>%
summarise(max_dep = sum(max_dep == 0, na.rm = T),
bott_dist = sum(bott_dist == 0, na.rm = T))
## max_dep bott_dist
## 1 0 628
# therefore, we need to include zero-mass parameters for bott_dist
# starting parameters (will use same ones for both for now)
mu0 <- c(130, 180) # mean
sigma0 <- c(60, 90) # sd
zm <- c(0.1, 0.1) # zero mass, where applicable
# fit dive-only HMMs
set.seed(1)
HMM_max_dep <- fitHMM(tracks_dives, nbStates = 2, dist = list(max_dep = "gamma"),
Par0 = list(max_dep = c(mu0, sigma0)))
HMM_bott_dist <- fitHMM(tracks_dives, nbStates = 2, dist = list(bott_dist = "gamma"),
Par0 = list(bott_dist = c(mu0, sigma0, zm)))
Next, we can integrate these into one HMM together with step length and turning angle. When we specify the starting parameters, we want to think about how the states may look. For example, we might expect one resident state with slower movement, lower angular concentration, deeper dives, and greater at-depth variability. The second state may be a travel state, with faster movement, greater angular concentration, shallower dives, and less at-depth variability.
# prep model
stateNames = c("resident", "travel")
nbStates = length(stateNames)
dist = list(step = "gamma", angle = "vm",
max_dep = "gamma", bott_dist = "gamma")
# Starting Pars
# view Pars from previous HMMs
getPar(HMM_gps_crw_NAgaps)$Par # state 1 ~ resident, state 2 ~ travel
## $step
## [1] 282.3941 757.6624 156.3310 248.9599
##
## $angle
## [1] 2.476524 19.032493
## $max_dep
## [1] 30.15317 407.61829 29.05073 163.55047
## $bott_dist
## [1] 23.9091357 315.3866816 24.7190599 203.5173933 0.2223528 0.5215131
# therefore must switch parameter estimates in HMM_max_dep & HMM_bott_dist
# combine starting Pars
step0 <- getPar(HMM_gps_crw_NAgaps)$Par$step
angle0 <- getPar(HMM_gps_crw_NAgaps)$Par$angle
max_dep0 <- c(getPar(HMM_max_dep)$Par$max_dep[c(2,1)], # mu1, mu2
getPar(HMM_max_dep)$Par$max_dep[c(4,3)]) # sd1, sd2
bott_dist0 <- c(getPar(HMM_bott_dist)$Par$bott_dist[c(2,1)], # mu1, mu2
getPar(HMM_bott_dist)$Par$bott_dist[c(4,3)], # sd1, sd2
getPar(HMM_bott_dist)$Par$bott_dist[c(6,5)]) # zm1, zm2
Par0 <- list(step = step0, angle = angle0, max_dep = max_dep0, bott_dist = bott_dist0)
Now, we can fit the HMM with movement and dive variables.
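As a quick check of the c(2,1) reindexing used above: momentuHMM packs a 2-state gamma parameter vector as c(mean_1, mean_2, sd_1, sd_2), so swapping the state labels means flipping each parameter block. A minimal base-R sketch with toy numbers:

```r
# fitted Par vector from a 2-state gamma data stream: mu1, mu2, sd1, sd2
par_fit <- c(mu1 = 30.2, mu2 = 407.6, sd1 = 29.1, sd2 = 163.6)
# swap state 1 and state 2 within each parameter block
par_swapped <- c(par_fit[c(2, 1)], par_fit[c(4, 3)])
par_swapped  # now ordered mu2, mu1, sd2, sd1
```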
set.seed(1)
HMM_move_dive <- fitHMM(tracks_dives, nbStates=nbStates, stateNames=stateNames, dist=dist, Par0=Par0)
Let's see what it looks like.
## Decoding state sequence... DONE
## Computing pseudo-residuals...
The tracks look interesting, but there are some issues in the pseudo-residuals. Looking at the QQ plots, the model appears to:
- overestimate the number of fast movement steps
- underestimate higher turning angles
- underestimate shallow dives and overestimate deep dives
- describe bott_dist poorly
The autocorrelation functions suggest there is residual autocorrelation in step and max_dep that is not well described by the model. Together, this information suggests that there may be additional states that are not represented. Let's try adding one more state with an intermediate speed, lower angular concentration, and shallow dives.
# define states
stateNames <- c("resident", "travel", "search")
nbStates <- length(stateNames)
# Starting Pars
# get Pars from last HMM_move_dive HMM
Pars <- getPar(HMM_move_dive)$Par
# combine starting Pars
step0 <- c(Pars$step[c(1,2)], mean(Pars$step[c(1,2)]), # mu
Pars$step[c(3,4)], mean(Pars$step[c(3,4)])) # sd
angle0 <- c(Pars$angle, 3)
max_dep0 <- c(Pars$max_dep[c(1,2)], 25, # mu
Pars$max_dep[c(3,4)], 10) # sd
bott_dist0 <- c(Pars$bott_dist[c(1,2)], mean(Pars$bott_dist[c(1,2)]), # mu
Pars$bott_dist[c(3,4)], mean(Pars$bott_dist[c(3,4)]), # sd
Pars$bott_dist[c(5,6)], mean(Pars$bott_dist[c(5,6)]))# zm
Par0 <- list(step = step0, angle = angle0, max_dep = max_dep0, bott_dist = bott_dist0)
# fit 3-state HMM
set.seed(1)
HMM_move_dive_3s <- fitHMM(tracks_dives, nbStates=nbStates, stateNames=stateNames, dist=dist, Par0=Par0)
## Computing pseudo-residuals...
## Decoding state sequence... DONE
Interestingly, the step and angle QQ-plots
did not improve much, though the step ACF did.
However, compared to the 2-state model, there was a drastic improvement
in the QQ-plot and ACF for the max_dep data stream and a marginal
improvement in the bott_dist data stream. The newly
described state appears to have very short step lengths, very wide
turning angles, minimal diving, and minimal at-depth activity, which may
actually be indicative of resting. The "resident" state has intermediate
speed and turning angles, but very deep dives and high at-depth activity,
suggesting it may represent foraging. Let's try fitting one more 4-state
HMM to address the remaining residuals: a state with intermediate speed,
lower angular concentration, and more intermediate dives, which may
represent searching behaviour.
# define states
stateNames <- c("forage", "travel", "rest", "search")
nbStates <- length(stateNames)
# Starting Pars
# get Pars from last HMM_move_dive HMM
Pars <- getPar(HMM_move_dive_3s)$Par
# combine starting Pars
step0 <- c(Pars$step[1:3], Pars$step[3], # mu
Pars$step[4:6], Pars$step[6]) # sd
angle0 <- c(Pars$angle, Pars$angle[3])
max_dep0 <- c(Pars$max_dep[c(1:2)], Pars$max_dep[3]/2, Pars$max_dep[3], # mu
Pars$max_dep[c(4:5)], Pars$max_dep[6]/2, Pars$max_dep[6]) # sd
bott_dist0 <- c(Pars$bott_dist[c(1:3)], Pars$bott_dist[3], # mu
Pars$bott_dist[c(4:6)], Pars$bott_dist[6], # sd
Pars$bott_dist[c(7:9)], Pars$bott_dist[9]) # zm
Par0 <- list(step = step0, angle = angle0, max_dep = max_dep0, bott_dist = bott_dist0)
# fit 4-state HMM
set.seed(1)
HMM_move_dive_4s <- fitHMM(tracks_dives, nbStates=nbStates, stateNames=stateNames, dist=dist, Par0=Par0)
## =======================================================================
## Fitting HMM with 4 states and 4 data streams
## -----------------------------------------------------------------------
## step ~ gamma(mean=~1, sd=~1)
## angle ~ vm(concentration=~1)
## max_dep ~ gamma(mean=~1, sd=~1)
## bott_dist ~ gamma(mean=~1, sd=~1, zeromass=~1)
##
## Transition probability matrix formula: ~1
##
## Initial distribution formula: ~1
## =======================================================================
## DONE
## Computing pseudo-residuals...
## Decoding state sequence... DONE
The QQ-plot for step is quite a bit improved, as is the one for
bott_dist. The ACF for step is also much
improved. At this point, I'm not sure the model can be significantly
improved with the existing data. Although HMMs with different numbers of
states generally shouldn't be compared this way, since AIC tends to favour
more states, AIC is still a good sanity check.
## Model AIC
## 1 HMM_move_dive_4s 70831.17
## 2 HMM_move_dive_3s 71706.75
## 3 HMM_move_dive 73241.60
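For reference, AIC is just -2 log-likelihood plus twice the number of parameters; with the fitted objects in hand, momentuHMM's AIC method should reproduce the table above. A minimal sketch (the commented one-liner assumes the three fitted models above exist):

```r
# AIC from first principles: penalize the log-likelihood by parameter count
aic <- function(loglik, n_par) -2 * loglik + 2 * n_par
aic(-100, 5)   # 210: a toy model with logLik -100 and 5 parameters
# with fitted momentuHMM objects, the comparison is a one-liner:
# AIC(HMM_move_dive, HMM_move_dive_3s, HMM_move_dive_4s)
```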